Search Results

Documents authored by Dexter, Gregory


Document
Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra

Authors: Rajarshi Bhattacharjee, Gregory Dexter, Cameron Musco, Archan Ray, Sushant Sachdeva, and David P. Woodruff

Published in: LIPIcs, Volume 287, 15th Innovations in Theoretical Computer Science Conference (ITCS 2024)


Abstract
Let S ∈ ℝ^{n × n} be any matrix satisfying ‖1-S‖₂ ≤ εn, where 1 is the all-ones matrix and ‖⋅‖₂ is the spectral norm. It is well known that there exists such an S with just O(n/ε²) non-zero entries achieving this guarantee: we can let S be the scaled adjacency matrix of a Ramanujan expander graph. We show that, beyond giving a sparse approximation to the all-ones matrix, S yields a universal sparsifier for any positive semidefinite (PSD) matrix. In particular, for any PSD A ∈ ℝ^{n×n} which is normalized so that its entries are bounded in magnitude by 1, we show that ‖A-A∘S‖₂ ≤ εn, where ∘ denotes the entrywise (Hadamard) product.

Our techniques also yield universal sparsifiers for non-PSD matrices. In this case, we show that if S satisfies ‖1-S‖₂ ≤ (ε²n)/(c log²(1/ε)) for some sufficiently large constant c, then ‖A-A∘S‖₂ ≤ ε⋅max(n, ‖A‖₁), where ‖A‖₁ is the nuclear norm. Again letting S be a scaled Ramanujan graph adjacency matrix, this yields a sparsifier with Õ(n/ε⁴) entries. We prove that the above universal sparsification bounds for both PSD and non-PSD matrices are tight up to logarithmic factors.

Since A∘S can be constructed deterministically without reading all of A, our result for PSD matrices derandomizes and improves upon established results for randomized matrix sparsification, which require sampling a random subset of O((n log n)/ε²) entries and only give an approximation to any fixed A with high probability. We further show that any randomized algorithm must read at least Ω(n/ε²) entries to spectrally approximate general A to error εn, thus proving that these existing randomized algorithms are optimal up to logarithmic factors.

We leverage our deterministic sparsification results to give the first deterministic algorithms for several problems, including singular value and singular vector approximation and positive semidefiniteness testing, that run in faster than matrix multiplication time. This partially addresses a significant gap between randomized and deterministic algorithms for fast linear algebraic computation. Finally, if A ∈ {-1,0,1}^{n × n} is PSD, we show that a spectral approximation Ã with ‖A-Ã‖₂ ≤ εn can be obtained by deterministically reading Õ(n/ε) entries of A. This improves the 1/ε dependence of our result for general PSD matrices by a quadratic factor and is information-theoretically optimal up to a logarithmic factor.
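The PSD guarantee above can be checked numerically. The sketch below is not taken from the paper: the function name expander_mask and the choice of sparsity pattern (a symmetrized union of random permutation matrices, used as a stand-in for the paper's deterministic Ramanujan expander construction) are illustrative assumptions. It builds a scaled sparse pattern S approximating the all-ones matrix in spectral norm and then checks that the Hadamard product A∘S spectrally approximates a bounded-entry PSD matrix A, as the universal sparsification theorem predicts.

import numpy as np

def expander_mask(n, d, rng):
    # Union of d random permutation matrices, symmetrized: a random proxy for a
    # d-regular expander adjacency matrix (the paper uses a deterministic
    # Ramanujan expander; this stand-in is only for a quick numerical check).
    S = np.zeros((n, n))
    for _ in range(d):
        perm = rng.permutation(n)
        S[np.arange(n), perm] += 1.0
    S = (S + S.T) / 2.0
    return S * (n / d)            # scale so that S approximates the all-ones matrix

rng = np.random.default_rng(0)
n, d = 1000, 200                  # d is on the order of 1/eps^2, up to constants
S = expander_mask(n, d, rng)

ones = np.ones((n, n))
print("||1 - S||_2 / n   =", np.linalg.norm(ones - S, 2) / n)

# A PSD matrix with entries bounded in magnitude by 1: a Gram matrix of unit vectors.
X = rng.standard_normal((n, 50))
X /= np.linalg.norm(X, axis=1, keepdims=True)
A = X @ X.T

print("||A - A∘S||_2 / n =", np.linalg.norm(A - A * S, 2) / n)

For d on the order of 1/ε², both printed ratios should come out roughly on the order of 1/√d ≈ ε, consistent with the εn error guarantee stated in the abstract.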

Cite as

Rajarshi Bhattacharjee, Gregory Dexter, Cameron Musco, Archan Ray, Sushant Sachdeva, and David P. Woodruff. Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra. In 15th Innovations in Theoretical Computer Science Conference (ITCS 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 287, pp. 13:1-13:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)


BibTeX

@InProceedings{bhattacharjee_et_al:LIPIcs.ITCS.2024.13,
  author =	{Bhattacharjee, Rajarshi and Dexter, Gregory and Musco, Cameron and Ray, Archan and Sachdeva, Sushant and Woodruff, David P.},
  title =	{{Universal Matrix Sparsifiers and Fast Deterministic Algorithms for Linear Algebra}},
  booktitle =	{15th Innovations in Theoretical Computer Science Conference (ITCS 2024)},
  pages =	{13:1--13:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-309-6},
  ISSN =	{1868-8969},
  year =	{2024},
  volume =	{287},
  editor =	{Guruswami, Venkatesan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ITCS.2024.13},
  URN =		{urn:nbn:de:0030-drops-195415},
  doi =		{10.4230/LIPIcs.ITCS.2024.13},
  annote =	{Keywords: sublinear algorithms, randomized linear algebra, spectral sparsification, expanders}
}
Document
Track A: Algorithms, Complexity and Games
Sublinear Time Eigenvalue Approximation via Random Sampling

Authors: Rajarshi Bhattacharjee, Gregory Dexter, Petros Drineas, Cameron Musco, and Archan Ray

Published in: LIPIcs, Volume 261, 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)


Abstract
We study the problem of approximating the eigenspectrum of a symmetric matrix A ∈ ℝ^{n×n} with bounded entries (i.e., ‖A‖_∞ ≤ 1). We present a simple sublinear time algorithm that approximates all eigenvalues of A up to additive error ±εn using those of a randomly sampled Õ((log³ n)/ε³) × Õ((log³ n)/ε³) principal submatrix. Our result can be viewed as a concentration bound on the complete eigenspectrum of a random submatrix, significantly extending known bounds on just the singular values (the magnitudes of the eigenvalues).

We give improved error bounds of ±ε√{nnz(A)} and ±ε‖A‖_F when the rows of A can be sampled with probabilities proportional to their sparsities or their squared 𝓁₂ norms respectively. Here nnz(A) is the number of non-zero entries in A and ‖A‖_F is its Frobenius norm. Even for the strictly easier problems of approximating the singular values or testing the existence of large negative eigenvalues (Bakshi, Chepurko, and Jayaram, FOCS '20), our results are the first that take advantage of non-uniform sampling to give improved error bounds.

From a technical perspective, our results require several new eigenvalue concentration and perturbation bounds for matrices with bounded entries. Our non-uniform sampling bounds require a new algorithmic approach, which judiciously zeroes out entries of a randomly sampled submatrix to reduce variance, before computing the eigenvalues of that submatrix as estimates for those of A. We complement our theoretical results with numerical simulations, which demonstrate the effectiveness of our algorithms in practice.
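The basic uniform-sampling estimator described in the abstract is simple enough to sketch. The code below is an illustrative reconstruction rather than the paper's pseudocode; the function name estimate_eigenvalues, the alignment heuristic, and the test matrix are assumptions. It samples an s × s principal submatrix, rescales its eigenvalues by n/s, and aligns the positive and negative estimates with the extremes of the full spectrum while estimating the bulk as zero. The variance-reducing zeroing-out step used for the non-uniform sampling bounds is omitted.

import numpy as np

def estimate_eigenvalues(A, s, rng):
    # Sample an s x s principal submatrix of the bounded-entry symmetric matrix A,
    # compute its eigenvalues, and rescale by n/s.
    n = A.shape[0]
    idx = rng.choice(n, size=s, replace=False)          # uniform row/column sample
    sub_eigs = np.linalg.eigvalsh(A[np.ix_(idx, idx)]) * (n / s)

    # Align estimates with the full spectrum: large positive estimates track the
    # top eigenvalues of A, large negative ones the bottom; the bulk is set to 0.
    est = np.zeros(n)
    neg = np.sort(sub_eigs[sub_eigs < 0])
    pos = np.sort(sub_eigs[sub_eigs > 0])
    est[:len(neg)] = neg
    if len(pos):
        est[-len(pos):] = pos
    return est                                          # sorted ascending

rng = np.random.default_rng(0)
n = 2000
u = rng.choice([-1.0, 1.0], size=n)
A = np.clip(0.5 * np.outer(u, u) + rng.uniform(-0.2, 0.2, (n, n)), -1, 1)
A = (A + A.T) / 2                                       # symmetric, |A_ij| <= 1

approx = estimate_eigenvalues(A, s=400, rng=rng)
exact = np.sort(np.linalg.eigvalsh(A))
print("max |approx - exact| / n =", np.max(np.abs(approx - exact)) / n)

With s chosen as in the theorem (polylogarithmic in n and polynomial in 1/ε), the reported maximum error divided by n should roughly track the target additive error ε; the small example above is only meant to illustrate the sample-rescale-align structure of the algorithm.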

Cite as

Rajarshi Bhattacharjee, Gregory Dexter, Petros Drineas, Cameron Musco, and Archan Ray. Sublinear Time Eigenvalue Approximation via Random Sampling. In 50th International Colloquium on Automata, Languages, and Programming (ICALP 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 261, pp. 21:1-21:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bhattacharjee_et_al:LIPIcs.ICALP.2023.21,
  author =	{Bhattacharjee, Rajarshi and Dexter, Gregory and Drineas, Petros and Musco, Cameron and Ray, Archan},
  title =	{{Sublinear Time Eigenvalue Approximation via Random Sampling}},
  booktitle =	{50th International Colloquium on Automata, Languages, and Programming (ICALP 2023)},
  pages =	{21:1--21:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-278-5},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{261},
  editor =	{Etessami, Kousha and Feige, Uriel and Puppis, Gabriele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2023.21},
  URN =		{urn:nbn:de:0030-drops-180733},
  doi =		{10.4230/LIPIcs.ICALP.2023.21},
  annote =	{Keywords: sublinear algorithms, eigenvalue approximation, randomized linear algebra}
}